Human demonstrations have been shown to significantly improve agent performance in reinforcement learning. However, any requirement for a human to manually "teach" the model runs somewhat counter to the goals of reinforcement learning. This paper attempts to minimize human involvement in the learning process while still retaining the performance benefit, by using a single human demonstration, collected through a simple-to-use virtual reality simulation, to assist RL training. Our method augments the single demonstration to generate many human-like demonstrations which, when combined with Deep Deterministic Policy Gradients and Hindsight Experience Replay (DDPG + HER), significantly improve training time on simple tasks and allow the agent to solve a complex task (block stacking) that DDPG + HER alone cannot solve. The model achieves this significant training advantage from a single human example, requiring less than one minute of human input.
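The abstract does not give pseudocode for the augmentation step; the following is a minimal sketch of one plausible way to turn a single recorded demonstration into many human-like ones, by jittering each state-action pair with small Gaussian noise. The function name, the (state, action) trajectory format, and the noise model are illustrative assumptions, not the paper's exact method.

```python
import random

def augment_demonstration(trajectory, n_copies=10, noise_std=0.01, seed=0):
    """Generate `n_copies` human-like trajectories from one demonstration
    by adding small Gaussian noise to every state and action vector.
    `trajectory` is a list of (state, action) tuples of float lists."""
    rng = random.Random(seed)

    def jitter(vec):
        return [x + rng.gauss(0.0, noise_std) for x in vec]

    return [
        [(jitter(state), jitter(action)) for state, action in trajectory]
        for _ in range(n_copies)
    ]

# One short demo of two (state, action) steps, expanded into 100 variants.
demo = [([0.0, 0.0], [1.0]), ([0.1, 0.2], [0.5])]
augmented = augment_demonstration(demo, n_copies=100)
```

Each augmented trajectory can then be inserted into the replay buffer alongside the agent's own experience before DDPG + HER training begins.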
Learning control policies in large action spaces is a challenging problem in the field of reinforcement learning, due to current inefficiencies in exploration. In this work, we introduce a deep reinforcement learning (DRL) algorithm called Multi-Action Networks (MAN) learning to address the challenge of large discrete action spaces. We propose separating the action space into two components, creating a value neural network for each sub-action. MAN then uses temporal-difference learning to train the networks synchronously, which is simpler than training a single network with a direct action output. To evaluate the proposed method, we test MAN on a block-stacking task, and then extend MAN to 12 games from the Atari Arcade Learning Environment with 18-dimensional action spaces. Our results indicate that MAN learns faster than both Deep Q-Learning and Double Deep Q-Learning, implying that our method is a better-performing synchronous temporal-difference algorithm than those currently available for large action spaces.
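The key structural idea, factoring one large action into two sub-actions with a value function each, updated synchronously from a shared temporal-difference error, can be sketched in tabular form. This is a simplification under stated assumptions: the class name, the additive combination of the two sub-action values, and the even split of the TD error are illustrative choices, and the paper uses neural networks rather than tables.

```python
from collections import defaultdict

class MultiActionLearner:
    """Tabular sketch of Multi-Action Network (MAN) learning: the action is
    factored into two sub-actions, each with its own value table, and both
    tables are updated synchronously from one temporal-difference error."""

    def __init__(self, sub_actions_a, sub_actions_b, alpha=0.1, gamma=0.99):
        self.q_a = defaultdict(float)   # values for the first sub-action
        self.q_b = defaultdict(float)   # values for the second sub-action
        self.sub_a, self.sub_b = sub_actions_a, sub_actions_b
        self.alpha, self.gamma = alpha, gamma

    def q(self, state, a, b):
        # The joint value is the sum of the two sub-action values.
        return self.q_a[(state, a)] + self.q_b[(state, b)]

    def best(self, state):
        # Greedy sub-actions can be chosen independently because Q is additive,
        # avoiding a max over the full |A| x |B| joint action space.
        a = max(self.sub_a, key=lambda x: self.q_a[(state, x)])
        b = max(self.sub_b, key=lambda x: self.q_b[(state, x)])
        return a, b

    def update(self, state, a, b, reward, next_state):
        na, nb = self.best(next_state)
        target = reward + self.gamma * self.q(next_state, na, nb)
        td_error = target - self.q(state, a, b)
        # Synchronous update: each table absorbs half of the shared TD error.
        self.q_a[(state, a)] += self.alpha * td_error / 2
        self.q_b[(state, b)] += self.alpha * td_error / 2

agent = MultiActionLearner(["left", "right"], ["grip", "release"])
agent.update("s0", "left", "grip", reward=1.0, next_state="s1")
```

The additive decomposition is what makes the greedy step cheap: each sub-action is maximized over its own small set instead of over the full Cartesian product.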
Multi-class ensemble classification remains a popular focus of investigation within the research community. The popularization of cloud services has sped up its adoption due to the ease of deploying large-scale machine-learning models. It has also drawn the attention of the industrial sector because of its ability to identify common problems in production. However, there are challenges in constructing an ensemble classifier, namely the proper selection and effective training of the pool of classifiers, the definition of a suitable architecture for multi-class classification, and uncertainty quantification of the ensemble classifier. The robustness and effectiveness of the ensemble classifier lie in the selection of the pool of classifiers, as well as in the learning process. Hence, the selection and the training procedure of the pool of classifiers play a crucial role. An (ensemble) classifier learns to detect the classes that were used during supervised training. However, when presented with data from unknown conditions, the trained classifier will still predict among the classes learned during training. To this end, the uncertainty of the individual classifiers and of the ensemble can be used to assess this limitation of the learning capability. We present a novel approach to novelty detection using ensemble classification and evidence theory. A pool selection strategy is presented to build a solid ensemble classifier. We present an architecture for multi-class ensemble classification and an approach to quantify the uncertainty of the individual classifiers and the ensemble classifier. We use this uncertainty for the anomaly detection approach. Finally, we use the Tennessee Eastman benchmark to perform experiments testing the ensemble classifier's prediction and anomaly detection capabilities.
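The evidence-theoretic core of such an approach can be illustrated with Dempster's rule of combination, which fuses the belief masses of two classifiers and yields a conflict term usable as an uncertainty signal. This is a generic sketch of evidence theory, not the paper's exact mass-assignment or anomaly-detection scheme; the mass functions below are assumed toy values.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal elements
    to belief mass) with Dempster's rule of combination. Returns the
    normalized combined masses and the conflict mass."""
    combined = {}
    conflict = 0.0
    for f1, v1 in m1.items():
        for f2, v2 in m2.items():
            inter = f1 & f2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2  # incompatible evidence
    norm = 1.0 - conflict
    return {f: v / norm for f, v in combined.items()}, conflict

# Two ensemble members' evidence over classes {A, B}. A high `conflict`
# value between members can flag an input as a potential novel condition.
m1 = {frozenset("A"): 0.8, frozenset("B"): 0.2}
m2 = {frozenset("A"): 0.6, frozenset("B"): 0.4}
fused, conflict = dempster_combine(m1, m2)
```

In an anomaly-detection setting, an input whose fused masses are diffuse or whose conflict exceeds a threshold would be routed to the "unknown condition" branch instead of being assigned a trained class.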
Riemannian geometry provides powerful tools to explore the latent space of generative models while preserving the inherent structure of the data manifold. Lengths, energies, and volume measures can be derived from a pullback metric, defined through the immersion that maps the latent space to the data space. However, most generative models are stochastic, and so is the pullback metric. Manipulating stochastic objects is strenuous in practice. In order to perform operations such as interpolation, or to measure the distance between data points, we need a deterministic approximation of the pullback metric. In this work, we define a new metric as the expected length derived from the stochastic pullback metric. We show this metric is Finslerian, and we compare it with the expected pullback metric. In high dimensions, we show that the metrics converge to each other at a rate of $\mathcal{O}\left(\frac{1}{D}\right)$.
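The construction described in the abstract can be written out explicitly. Assuming a stochastic immersion $f : \mathcal{Z} \to \mathcal{X}$ with (random) Jacobian $J_f(z)$, with this notation being ours rather than necessarily the paper's, the objects being compared are:

```latex
% Stochastic pullback metric induced by the immersion f : Z -> X
G(z) = J_f(z)^\top J_f(z) \quad \text{(random, since } f \text{ is stochastic)}

% Expected (Riemannian) pullback metric
\bar{G}(z) = \mathbb{E}\left[\, G(z) \,\right]

% Proposed Finsler metric: the expected length of a tangent vector v
F(z, v) = \mathbb{E}\left[\, \sqrt{\, v^\top G(z)\, v \,} \,\right]

% Jensen's inequality gives F(z,v) <= sqrt(v^T \bar{G}(z) v);
% in data dimension D the gap closes at the stated rate:
\sqrt{\, v^\top \bar{G}(z)\, v \,} \;-\; F(z, v) \;=\; \mathcal{O}\!\left(\tfrac{1}{D}\right)
```

Note that $F$ is homogeneous of degree one in $v$ but generally not induced by an inner product, which is why it is Finslerian rather than Riemannian.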
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practice as well as bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, as well as algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants stated that they did not have enough time for method development (32%). 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based. Of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
The prospect of few-shot learning in medical image analysis is the efficient use of support image data, which are labeled to classify or segment new classes, a task that otherwise requires many more training images and expert annotations. This work describes a fully 3D prototypical few-shot segmentation algorithm, such that a trained network can be effectively adapted to clinically interesting structures that are absent in training, using only a few labeled images from a different institute. First, to compensate for the widely recognized spatial variability between institutions, in episodic adaptation to novel classes, a novel spatial registration mechanism is integrated into prototypical learning, consisting of a segmentation head and a spatial alignment module. Second, to help the training cope with the observed imperfect alignment, a support mask conditioning module is proposed to further utilize the annotation available in the support images. Experiments are presented for an application of segmenting eight anatomical structures of interest for interventional planning, using a dataset of 589 pelvic T2-weighted MR images acquired from multiple institutes. The results demonstrate the efficacy of each of the 3D formulation, spatial registration, and support mask conditioning, all of which made positive contributions independently or collectively. Compared with previously proposed 2D alternatives, improved few-shot segmentation performance was observed with statistical significance, regardless of whether the support data come from the same or a different institute.
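The prototypical core of such a segmentation algorithm can be sketched in two steps: masked average pooling of support features into a class prototype, and per-voxel nearest-prototype labeling of the query. This is a minimal illustration of standard prototypical segmentation; the paper's spatial registration and support mask conditioning modules are omitted, and the toy "voxels" and class names below are assumptions.

```python
def masked_average_prototype(features, mask):
    """Masked average pooling: average the feature vectors of the voxels
    labeled as foreground (mask == 1) in the support image."""
    selected = [f for f, m in zip(features, mask) if m == 1]
    dim = len(features[0])
    return [sum(f[i] for f in selected) / len(selected) for i in range(dim)]

def segment_by_prototype(query_features, prototypes):
    """Assign each query voxel to the nearest class prototype by squared
    Euclidean distance; `prototypes` maps class name -> prototype vector."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(prototypes, key=lambda c: dist(f, prototypes[c]))
            for f in query_features]

# Toy 1-shot example: 4 support "voxel" features with a binary support mask.
support = [[0.0, 0.1], [0.9, 1.0], [1.0, 0.9], [0.1, 0.0]]
mask = [0, 1, 1, 0]
protos = {
    "background": masked_average_prototype(support, [1 - m for m in mask]),
    "gland": masked_average_prototype(support, mask),
}
labels = segment_by_prototype([[0.95, 0.95], [0.05, 0.05]], protos)
```

In the full 3D setting, the features would come from a network encoder over registered volumes, and adaptation to a new structure only requires computing new prototypes from a few labeled support volumes.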
Understanding human mobility is essential for the development of smart cities and for social behavior research. Human mobility models may be used in numerous applications, including pandemic control, urban planning, and traffic management. Existing models' accuracy in predicting users' mobility patterns is less than 25%. This low accuracy may be justified by the flexible nature of human movement; indeed, humans are not rigid in their daily movements. In addition, rigid mobility models may miss hidden regularities in users' records. Hence, we propose a novel perspective to study and analyze human mobility patterns and to capture their flexibility. Typically, a mobility pattern is represented by a sequence of locations. We propose, instead, to define mobility patterns by abstracting these locations into a set of places. Labeling these places enables us to detect hidden patterns that are close to reality. We present IMAP, an Individual human Mobility Pattern visualization platform. Our platform enables users to visualize a graph of the places they have visited, based on their historical records. In addition, our platform displays the most frequent mobility patterns, computed using a modified PrefixSpan approach.
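The pattern-mining step can be illustrated with a much-simplified sequential pattern counter over abstracted places: count contiguous place subsequences across daily trajectories and keep those meeting a minimum support. This is not the paper's modified PrefixSpan algorithm, only a hedged sketch of the kind of frequent-pattern output the platform would display; the place names and thresholds are invented.

```python
from collections import Counter

def frequent_patterns(trajectories, min_support=2, max_len=3):
    """Count contiguous place subsequences (length 2..max_len) across
    users' daily trajectories; keep patterns meeting `min_support`."""
    counts = Counter()
    for traj in trajectories:
        seen = set()  # count each pattern at most once per trajectory
        for n in range(2, max_len + 1):
            for i in range(len(traj) - n + 1):
                seen.add(tuple(traj[i:i + n]))
        counts.update(seen)
    return {p: c for p, c in counts.items() if c >= min_support}

# Three days of movement, with raw locations already abstracted into places.
days = [
    ["home", "cafe", "work", "gym"],
    ["home", "cafe", "work", "home"],
    ["home", "work", "gym"],
]
patterns = frequent_patterns(days)
```

Because locations are abstracted into labeled places before mining, small GPS-level variations in the raw records no longer fragment what is really one recurring pattern.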
Self-supervised contrastive representation learning offers the advantage of learning meaningful visual representations from unlabeled medical datasets for transfer learning. However, applying current contrastive learning approaches to medical data without considering its domain-specific anatomical characteristics may lead to visual representations that are inconsistent in appearance and semantics. In this paper, we propose to improve visual representations of medical images via anatomy-aware contrastive learning (AWCL), which incorporates anatomy information to augment the positive/negative pair sampling in a contrastive learning manner. The proposed approach is demonstrated for automated fetal ultrasound imaging tasks, enabling positive pairs from the same or different ultrasound scans that are anatomically similar to be pulled together, thus improving the representation learning. We empirically investigate the effect of incorporating anatomical information at coarse- and fine-grained granularity for contrastive learning, and find that learning with fine-grained anatomy information, which preserves intra-class difference, is more effective than its counterpart. We also analyze the impact of the anatomy ratio on our AWCL framework and find that using more distinct but anatomically similar samples to compose positive pairs results in better-quality representations. Extensive experiments on a large-scale fetal ultrasound dataset demonstrate that our approach is effective for learning representations that transfer well to three clinical downstream tasks, and achieves superior performance compared with ImageNet-supervised and the current state-of-the-art contrastive learning methods. In particular, AWCL outperforms the ImageNet-supervised method by 13.8% and the state-of-the-art contrastive-based method by 7.1% on a cross-domain segmentation task.
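The anatomy-aware positive sampling idea can be sketched with a supervised-contrastive-style loss in which every other sample sharing the same anatomy label counts as a positive. This is a simplified stand-in, not the exact AWCL formulation; the embedding values, anatomy labels, and temperature below are assumed for illustration, and embeddings are taken to be L2-normalized.

```python
import math

def anatomy_aware_contrastive_loss(embeddings, anatomy_labels, temperature=0.1):
    """SupCon-style loss: for each anchor, samples with the same anatomy
    label are positives to pull together; all others are pushed away."""
    def sim(a, b):
        # Cosine similarity (embeddings assumed unit-norm), scaled by temperature.
        return sum(x * y for x, y in zip(a, b)) / temperature

    n = len(embeddings)
    total = 0.0
    for i in range(n):
        positives = [j for j in range(n)
                     if j != i and anatomy_labels[j] == anatomy_labels[i]]
        if not positives:
            continue  # anchors without an anatomical match contribute nothing
        denom = sum(math.exp(sim(embeddings[i], embeddings[k]))
                    for k in range(n) if k != i)
        for j in positives:
            total += -math.log(
                math.exp(sim(embeddings[i], embeddings[j])) / denom
            ) / len(positives)
    return total / n

# Two anatomically similar views (same "heart" label, possibly from
# different scans) plus one "abdomen" view acting as a negative.
embs = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
labels = ["heart", "heart", "abdomen"]
loss = anatomy_aware_contrastive_loss(embs, labels)
```

Relative to instance-level contrastive learning, the label-driven positive set is what lets anatomically similar crops from different scans be pulled together rather than treated as negatives.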
Labeling and maintaining a commercial sound effects library is a time-consuming task, exacerbated by databases that continually grow in size and undergo taxonomy updates. Moreover, uneven metadata complicates sound search and taxonomy creation, a persistent problem even with the introduction of a new industry standard, the Universal Category System. To address these problems and overcome the dataset-dependent limitations that inhibit the successful training of deep learning models, we pursue representation learning to train generalized embeddings that can be used across multiple sound effects libraries and serve as a taxonomy-agnostic representation of sound. We show that a task-specific but dataset-independent representation can successfully address data issues such as class imbalance, inconsistent class labels, and insufficient dataset size, outperforming established representations such as OpenL3. Detailed experimental results show the impact of metric learning approaches and of different cross-dataset training methods on representational effectiveness.
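A standard building block for the metric learning mentioned in the abstract is the triplet margin loss, which trains embeddings so that clips with the same effect label sit closer together than clips with different labels. The paper compares several metric-learning approaches; this sketch shows only the generic triplet objective, with the embedding values and class examples below being illustrative assumptions.

```python
def triplet_loss(anchor, positive, negative, margin=0.2):
    """Triplet margin loss: pull a sound's embedding toward a clip with the
    same effect label (positive) and away from a different one (negative).
    Zero loss once the negative is at least `margin` farther than the positive."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

# Embeddings for three clips (e.g., two "door slam" clips and one "rain").
anchor, positive, negative = [0.1, 0.9], [0.2, 0.8], [0.9, 0.1]
loss = triplet_loss(anchor, positive, negative)
```

Training on triplets drawn across multiple libraries is one way to obtain the dataset-independent, taxonomy-agnostic behavior the abstract describes: the objective depends only on relative label agreement, not on any one library's category scheme.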